04/25/2018
```
## Linear mixed model fit by REML ['lmerMod']
## Formula: Reaction ~ 1 + Days + (1 + Days | Subject)
##    Data: sleepstudy
##
## REML criterion at convergence: 1744
##
## Scaled residuals:
##    Min     1Q Median     3Q    Max
## -3.954 -0.463  0.023  0.463  5.179
##
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr
##  Subject  (Intercept) 612.1    24.74
##           Days         35.1     5.92   0.07
##  Residual             654.9    25.59
##
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)   251.41       6.82    36.8
## Days           10.47       1.55     6.8
##
## Correlation of Fixed Effects:
##      (Intr)
## Days -0.138
```
```r
model.full.ml <- update(model.full, REML=FALSE)
```
```
## Formula: Reaction ~ 1 + Days + (1 + Days | Subject)
##    Data: sleepstudy
##
##  AIC  BIC logLik deviance df.resid
## 1764 1783   -876     1752      174
##
## Scaled residuals:
##    Min     1Q Median     3Q    Max
## -3.942 -0.466  0.029  0.464  5.179
##
## Random effects:
##  Groups   Name        Variance Std.Dev. Corr
##  Subject  (Intercept) 565.5    23.78
##           Days         32.7     5.72   0.08
##  Residual             654.9    25.59
## Number of obs: 180, groups:  Subject, 18
##
## Fixed effects:
##             Estimate Std. Error t value
## (Intercept)   251.41       6.63    37.9
## Days           10.47       1.50     7.0
##
## Correlation of Fixed Effects:
##      (Intr)
## Days -0.138
```
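The AIC and BIC in the ML output follow directly from the reported log-likelihood. A quick back-of-the-envelope check (sketched in Python here, using the rounded values printed above):

```python
# The ML fit reports logLik = -876 with df.resid = 174, so the model uses
# k = 180 - 174 = 6 parameters (2 fixed effects, 3 variance components,
# 1 random-effect correlation).
import math

log_lik = -876        # logLik from the ML fit (rounded)
k = 6                 # number of estimated parameters
n = 180               # number of observations

aic = -2 * log_lik + 2 * k            # 1764, matching the output
bic = -2 * log_lik + k * math.log(n)  # about 1783, matching the output
print(aic, round(bic))
```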
lme4 has some built-in protections to prevent REML-based comparisons, but don't depend on these!

```r
m1 <- update(model.intercepts, REML=FALSE)
m2 <- update(model.full, REML=FALSE)
anova(m1, m2)
```
```
## Data: sleepstudy
## Models:
## m1: Reaction ~ 1 + Days + (1 | Subject)
## m2: Reaction ~ 1 + Days + (1 + Days | Subject)
##    Df  AIC  BIC logLik deviance Chisq Chi Df Pr(>Chisq)
## m1  4 1802 1815   -897     1794
## m2  6 1764 1783   -876     1752  42.1      2    7.1e-10 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
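The numbers in the `anova()` table come from a simple calculation: the likelihood ratio statistic is twice the difference in log-likelihoods (equivalently, the difference in deviances), and the degrees of freedom equal the difference in parameter counts. Checking with the rounded values above (the table's 42.1 comes from the unrounded log-likelihoods):

```python
# Likelihood ratio test by hand, using the rounded anova() output above.
ll_m1, ll_m2 = -897, -876   # logLik of the restricted and full model
df_m1, df_m2 = 4, 6         # parameter counts

chisq = 2 * (ll_m2 - ll_m1)  # = deviance(m1) - deviance(m2) = 1794 - 1752
df = df_m2 - df_m1
print(chisq, df)             # 42 on 2 df (table shows 42.1 from unrounded values)
```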
```r
m1 <- update(model.slopes, REML=FALSE)
m2 <- update(model.full, REML=FALSE)
anova(m1, m2)
```
```
## Data: sleepstudy
## Models:
## m1: Reaction ~ 1 + Days + (0 + Days | Subject)
## m2: Reaction ~ 1 + Days + (1 + Days | Subject)
##    Df  AIC  BIC logLik deviance Chisq Chi Df Pr(>Chisq)
## m1  4 1782 1795   -887     1774
## m2  6 1764 1783   -876     1752  22.1      2   0.000016 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Compound symmetry (the structure implied by a random intercept, with constant correlation \(\rho = \sigma_b^2/(\sigma_b^2+\sigma_e^2)\)):

\[\sigma^2 \begin{bmatrix} 1.0 & \rho & \rho & \rho \\ & 1.0 & \rho & \rho \\ & & 1.0 & \rho \\ & & & 1.0 \end{bmatrix} = \begin{bmatrix} \sigma_b^2+\sigma_e^2 & \sigma_b^2 & \sigma_b^2 & \sigma_b^2 \\ & \sigma_b^2+\sigma_e^2 & \sigma_b^2 & \sigma_b^2 \\ & & \sigma_b^2+\sigma_e^2 & \sigma_b^2 \\ & & & \sigma_b^2+\sigma_e^2 \end{bmatrix}\]
First-order autoregressive (AR(1)), where the correlation decays geometrically with lag:

\[\sigma^2 \begin{bmatrix} 1.0 & \rho & \rho^2 & \rho^3 \\ & 1.0 & \rho & \rho^2 \\ & & 1.0 & \rho \\ & & & 1.0 \end{bmatrix}\]
Spatial power (AR(1) generalized to unequally spaced times, with the gaps scaled by the first interval \(|t_1-t_2|\)):

\[\sigma^2 \begin{bmatrix} 1.0 & \rho^{\frac{|t_1-t_2|}{|t_1-t_2|}} & \rho^{\frac{|t_1-t_3|}{|t_1-t_2|}} & \rho^{\frac{|t_1-t_4|}{|t_1-t_2|}} \\ & 1.0 & \rho^{\frac{|t_2-t_3|}{|t_1-t_2|}} & \rho^{\frac{|t_2-t_4|}{|t_1-t_2|}} \\ & & 1.0 & \rho^{\frac{|t_3-t_4|}{|t_1-t_2|}} \\ & & & 1.0 \end{bmatrix}\]
Unstructured, where every variance and covariance is estimated freely:

\[\begin{bmatrix} \sigma_1^2 & \sigma_{12} & \sigma_{13} & \sigma_{14} \\ & \sigma_2^2 & \sigma_{23} & \sigma_{24} \\ & & \sigma_3^2 & \sigma_{34}\\ & & & \sigma_4^2 \end{bmatrix}\]
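Two of these structures are easy to verify numerically. The sketch below (in Python, with the sleepstudy variance estimates borrowed as illustrative values and an arbitrary \(\rho\)) builds a compound symmetry matrix from a random intercept and an AR(1) correlation matrix:

```python
# Compound symmetry from a random intercept, and AR(1) decay, for T = 4
# time points. sigma_b2 and sigma_e2 are illustrative values; rho is arbitrary.
sigma_b2, sigma_e2 = 612.1, 654.9
T = 4

# Random intercept implies Cov(y_i, y_j) = sigma_b^2 + sigma_e^2 * (i == j):
# every off-diagonal entry is sigma_b^2, every diagonal is sigma_b^2 + sigma_e^2.
cs = [[sigma_b2 + (sigma_e2 if i == j else 0.0) for j in range(T)]
      for i in range(T)]
print(round(cs[0][0], 1), cs[0][1])   # 1267.0 on the diagonal, 612.1 off it

# AR(1): Corr(y_i, y_j) = rho^|i-j|
rho = 0.5
ar1 = [[rho ** abs(i - j) for j in range(T)] for i in range(T)]
print(ar1[0][3])                      # 0.125 = rho^3
```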
If we use the lme4 package and the lmer() function, then we do not need to specify a correlation structure for these models. If we instead use the nlme package and the lme() function, then we can specify the covariance structure and compare the models using likelihood ratio tests to see which covariance structure works best.

```r
gpa_lm = lm(gpa ~ occasion, data=gpa)
```
|   | Estimate | Std. Error | t value | Pr(>\|t\|) |
|---|---|---|---|---|
| (Intercept) | 2.599 | 0.018 | 145.7 | 0 |
| occasion | 0.106 | 0.006 | 18.04 | 0 |
| Observations | Residual Std. Error | \(R^2\) | Adjusted \(R^2\) |
|---|---|---|---|
| 1200 | 0.3487 | 0.2136 | 0.2129 |
```r
library(lme4)
gpa_mixed = lmer(gpa ~ occasion + (1|student), data=gpa)
summary(gpa_mixed)
```
| term | estimate | std.error | statistic |
|---|---|---|---|
| (Intercept) | 2.599 | 0.022 | 119.800 |
| occasion | 0.106 | 0.004 | 26.096 |
| grp | variance | sd |
|---|---|---|
| student | 0.064 | 0.252 |
| Residual | 0.058 | 0.241 |
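One way to read the random-effects table: the share of total variance attributable to students (the intraclass correlation) is the student variance over the sum of the two components. A quick check in Python with the values above:

```python
# Proportion of variance due to students, from the random-effects table.
var_student, var_residual = 0.064, 0.058
icc = var_student / (var_student + var_residual)
print(round(icc, 2))   # about 0.52: roughly half the variance is between students
```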
```r
confint(gpa_mixed)
```
|   | 2.5% | 97.5% |
|---|---|---|
| student | 0.225 | 0.282 |
| residual | 0.231 | 0.252 |
| Intercept | 2.557 | 2.642 |
| occasion | 0.098 | 0.114 |
```r
ranef(gpa_mixed)$student %>% head(5)
coef(gpa_mixed)$student %>% head(5)
```
| (Intercept) |
|---|
| -0.071 |
| -0.216 |
| 0.088 |
| -0.187 |
| 0.030 |
| (Intercept) | occasion |
|---|---|
| 2.528 | 0.106 |
| 2.384 | 0.106 |
| 2.687 | 0.106 |
| 2.413 | 0.106 |
| 2.630 | 0.106 |
Passing `re.form=NA` to `predict()` ignores the random effects, giving population-level predictions:

```r
predict_no_re = predict(gpa_mixed, re.form=NA)
predict_lm = predict(gpa_lm)
```
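A small sketch of what that distinction means (in Python, using the estimates from the tables above, the first student's random intercept, and an arbitrarily chosen occasion):

```python
# Population-level prediction uses only fixed effects; a student-specific
# prediction adds that student's random intercept.
intercept, slope = 2.599, 0.106
b_student1 = -0.071        # first row of the ranef() table

occasion = 3               # arbitrary time point for illustration
pop_pred = intercept + slope * occasion                  # re.form=NA behavior
student_pred = (intercept + b_student1) + slope * occasion
print(round(pop_pred, 3), round(student_pred, 3))        # 2.917 2.846
```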
```r
gpa_mixed = lmer(gpa ~ occasion + (1 + occasion|student), data=gpa)
summary(gpa_mixed)
```
| term | estimate | std.error | statistic | conf.low | conf.high |
|---|---|---|---|---|---|
| (Intercept) | 2.599 | 0.018 | 141.592 | 2.563 | 2.635 |
| occasion | 0.106 | 0.006 | 18.066 | 0.095 | 0.118 |
| grp | re | variance | sd |
|---|---|---|---|
| student | (Intercept) | 0.045 | 0.213 |
| student | occasion | 0.005 | 0.067 |
| Residual |   | 0.042 | 0.206 |
```r
m1 <- lmer(gpa ~ occasion + (1|student), data=gpa, REML=FALSE)
m2 <- lmer(gpa ~ occasion + (1 + occasion|student), data=gpa, REML=FALSE)
anova(m1, m2)
```
```
## Data: gpa
## Models:
## m1: gpa ~ occasion + (1 | student)
## m2: gpa ~ occasion + (1 + occasion | student)
##    Df AIC BIC logLik deviance Chisq Chi Df Pr(>Chisq)
## m1  4 402 422   -197      394
## m2  6 258 289   -123      246   147      2     <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
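The same arithmetic as before applies to this comparison: twice the gap in log-likelihoods, on df = 6 - 4 = 2. With the rounded log-likelihoods printed above we get 148; the table's 147 comes from the unrounded values.

```python
# Likelihood ratio statistic for random intercepts vs random slopes,
# from the rounded anova() output above.
ll_m1, ll_m2 = -197, -123
chisq = 2 * (ll_m2 - ll_m1)
print(chisq)   # 148 (table shows 147 from unrounded logLiks)
```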